Excitation Backprop for RNNs
Authors
Abstract
Deep models are state-of-the-art for many vision tasks including video action recognition and video captioning. Models are trained to caption or classify activity in videos, but little is known about the evidence used to make such decisions. Grounding decisions made by deep networks has been studied in spatial visual content, giving more insight into model predictions for images. However, such studies are relatively lacking for models of spatiotemporal visual content – videos. In this work, we devise a formulation that simultaneously grounds evidence in space and time, in a single pass, using top-down saliency. We visualize the spatiotemporal cues that contribute to a deep model’s classification/captioning output using the model’s internal representation. Based on these spatiotemporal cues, we are able to localize segments within a video that correspond with a specific action, or phrase from a caption, without explicitly optimizing/training for these tasks.
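The abstract describes the method only at a high level. As a rough illustration of the kind of single-pass, top-down propagation it refers to, here is a minimal sketch that pushes a class-conditional probability signal backward through an unrolled RNN in the spirit of excitation backprop; the toy ReLU RNN, the shapes, and the variable names are assumptions for illustration, not the authors' implementation.

# A minimal sketch, assuming the positive-weight probability-propagation rule of
# excitation backprop and a toy ReLU RNN; shapes, names, and the random data are
# illustrative, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

T, D, H, C = 8, 16, 32, 5            # timesteps, feature dim, hidden dim, classes
W_xh = rng.normal(size=(H, D))       # input-to-hidden weights
W_hh = rng.normal(size=(H, H))       # hidden-to-hidden (recurrent) weights
W_hy = rng.normal(size=(C, H))       # hidden-to-output weights
x = np.abs(rng.normal(size=(T, D)))  # non-negative per-frame features

# Forward pass: a deliberately simple ReLU RNN over T frames.
h = [np.zeros(H)]
for t in range(T):
    h.append(np.maximum(0.0, W_xh @ x[t] + W_hh @ h[-1]))
logits = W_hy @ h[-1]

def eb_step(W, a_child, p_parent, eps=1e-12):
    # Distribute each parent's probability mass over its children through the
    # positive part of W (rows index parents, columns index children).
    Wp = np.maximum(W, 0.0)
    z = Wp @ a_child + eps                     # per-parent normalization
    return a_child * (Wp.T @ (p_parent / z))   # children's marginal winning probability

# Single top-down pass: ground the predicted class in features (space) and frames (time).
c = int(np.argmax(logits))
p_out = np.zeros(C)
p_out[c] = 1.0                                 # top-down prior on the output layer
p_h = eb_step(W_hy, h[-1], p_out)              # mass on the final hidden state

saliency = np.zeros((T, D))
for t in reversed(range(T)):
    a_cat = np.concatenate([x[t], h[t]])       # children of h[t+1]: frame t and h[t]
    p_cat = eb_step(np.hstack([W_xh, W_hh]), a_cat, p_h)
    saliency[t] = p_cat[:D]                    # evidence assigned to frame t's features
    p_h = p_cat[D:]                            # mass carried back to the previous state

per_frame = saliency.sum(axis=1)               # temporal relevance profile
print("most relevant frame for class", c, "is", int(per_frame.argmax()))

Summing the per-frame mass gives a temporal relevance profile, and the per-feature mass at each step plays the role of the spatial map; the paper's models operate on convolutional features of video frames rather than this toy setup.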
Similar resources
Learning Many Related Tasks at the Same Time with Backpropagation
Hinton [6] proposed that generalization in artificial neural nets should improve if nets learn to represent the domain's underlying regularities. Abu-Mustafa's hints work [1] shows that the outputs of a backprop net can be used as inputs through which domain-specific information can be given to the net. We extend these ideas by showing that a backprop net learning many related tasks at the sam...
2 MECHANISMS OF MULTITASK BACKPROP
Hinton [6] proposed that generalization in artificial neural nets should improve if nets learn to represent the domain's underlying regularities. Abu-Mustafa's hints work [1] shows that the outputs of a backprop net can be used as inputs through which domain-specific information can be given to the net. We extend these ideas by showing that a backprop net learning many related tasks at the same tim...
Deep Learning: Autodiff, Parameter Tying and Backprop Through Time
How to do parameter tying and how this relates to backprop through time (a small sketch of the idea follows this list).
Kickback Cuts Backprop's Red-Tape: Biologically Plausible Credit Assignment in Neural Networks
Error backpropagation is an extremely effective algorithm for assigning credit in artificial neural networks. However, weight updates under Backprop depend on lengthy recursive computations and require separate output and error messages – features not shared by biological neurons, that are perhaps unnecessary. In this paper, we revisit Backprop and the credit assignment problem. We first decomp...
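Relating to the "Autodiff, Parameter Tying and Backprop Through Time" entry above, the sketch below (a minimal NumPy illustration of the idea, not code from that note) shows the core point: backprop through time is ordinary backprop on the unrolled graph with the per-step weights tied, so each shared matrix accumulates the sum of its per-step gradients.

# A minimal sketch, not code from the cited note: BPTT is plain backprop on the
# unrolled graph with tied step weights, so each shared matrix accumulates the
# sum of its per-step gradients.
import numpy as np

rng = np.random.default_rng(1)
T, D, H = 4, 3, 5
W_xh = rng.normal(size=(H, D)) * 0.1   # tied input-to-hidden weights (reused every step)
W_hh = rng.normal(size=(H, H)) * 0.1   # tied recurrent weights (reused every step)
x = rng.normal(size=(T, D))
target = rng.normal(size=H)

# Forward pass with tanh, caching hidden states for the backward pass.
h = [np.zeros(H)]
for t in range(T):
    h.append(np.tanh(W_xh @ x[t] + W_hh @ h[-1]))
loss = 0.5 * np.sum((h[-1] - target) ** 2)

# Backward pass through time: one gradient accumulator per tied parameter.
dW_xh = np.zeros_like(W_xh)
dW_hh = np.zeros_like(W_hh)
dh = h[-1] - target                     # dL/dh_T (loss only on the last state)
for t in reversed(range(T)):
    dz = dh * (1.0 - h[t + 1] ** 2)     # back through tanh at step t
    dW_xh += np.outer(dz, x[t])         # tied weights: per-step gradients are summed
    dW_hh += np.outer(dz, h[t])
    dh = W_hh.T @ dz                    # carry the gradient back to h_{t-1}

print("loss:", float(loss), "||dW_hh|| summed over", T, "steps:", np.linalg.norm(dW_hh))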
Journal: CoRR
Volume: abs/1711.06778
Pages: -
Publication date: 2017